Exploration server on music in Saarland

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Towards Autonomous, Perceptive, and Intelligent Virtual Actors

Internal identifier: 000E23 (Main/Exploration); previous: 000E22; next: 000E24

Towards Autonomous, Perceptive, and Intelligent Virtual Actors

Authors: Daniel Thalmann [Switzerland]; Hansrudi Noser [Switzerland]

Source:

RBID : ISTEX:B547EB8A97D3A736983442C9C1F8FB4BFA13E9BE

English descriptors

Abstract

Abstract: This paper explains methods to provide autonomous virtual humans with the skills necessary to perform a stand-alone role in films, games, and interactive television. We present current research developments in the Virtual Life of autonomous synthetic actors. After a brief description of our geometric, physical, and auditory Virtual Environments, we introduce the perception-action principles with a few simple examples. We emphasize the concept of virtual sensors for virtual humans. In particular, we describe our experiences in implementing virtual sensors such as vision sensors, tactile sensors, and hearing sensors. We then describe knowledge-based navigation, knowledge-based locomotion, and, in more detail, sensor-based tennis.

URL:
DOI: 10.1007/3-540-48317-9_19
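The full-text URL stored in the TEI record is derived from the ISTEX identifier carried by the RBID. A minimal Python sketch of that derivation (the URL pattern is copied from the record's own `<idno type="url">` entry; no network access or error handling included):

```python
# Derive the ISTEX full-text URL from the record's RBID.
rbid = "ISTEX:B547EB8A97D3A736983442C9C1F8FB4BFA13E9BE"

# The RBID is "<source>:<document id>"; split off the ISTEX hash.
source, istex_id = rbid.split(":", 1)
assert source == "ISTEX"

# URL pattern as it appears in the record's <idno type="url">.
fulltext_url = f"https://api.istex.fr/document/{istex_id}/fulltext/pdf"
print(fulltext_url)
```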


Affiliations:


Links toward previous steps (curation, corpus...)


Le document en format XML

<record>
<TEI wicri:istexFullTextTei="biblStruct:series">
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Towards Autonomous, Perceptive, and Intelligent Virtual Actors</title>
<author>
<name sortKey="Thalmann, Daniel" sort="Thalmann, Daniel" uniqKey="Thalmann D" first="Daniel" last="Thalmann">Daniel Thalmann</name>
</author>
<author>
<name sortKey="Noser, Hansrudi" sort="Noser, Hansrudi" uniqKey="Noser H" first="Hansrudi" last="Noser">Hansrudi Noser</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">ISTEX</idno>
<idno type="RBID">ISTEX:B547EB8A97D3A736983442C9C1F8FB4BFA13E9BE</idno>
<date when="1999" year="1999">1999</date>
<idno type="doi">10.1007/3-540-48317-9_19</idno>
<idno type="url">https://api.istex.fr/document/B547EB8A97D3A736983442C9C1F8FB4BFA13E9BE/fulltext/pdf</idno>
<idno type="wicri:Area/Istex/Corpus">001285</idno>
<idno type="wicri:explorRef" wicri:stream="Istex" wicri:step="Corpus" wicri:corpus="ISTEX">001285</idno>
<idno type="wicri:Area/Istex/Curation">001194</idno>
<idno type="wicri:Area/Istex/Checkpoint">000C11</idno>
<idno type="wicri:explorRef" wicri:stream="Istex" wicri:step="Checkpoint">000C11</idno>
<idno type="wicri:doubleKey">0302-9743:1999:Thalmann D:towards:autonomous:perceptive</idno>
<idno type="wicri:Area/Main/Merge">000E24</idno>
<idno type="wicri:Area/Main/Curation">000E23</idno>
<idno type="wicri:Area/Main/Exploration">000E23</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title level="a" type="main" xml:lang="en">Towards Autonomous, Perceptive, and Intelligent Virtual Actors</title>
<author>
<name sortKey="Thalmann, Daniel" sort="Thalmann, Daniel" uniqKey="Thalmann D" first="Daniel" last="Thalmann">Daniel Thalmann</name>
<affiliation wicri:level="3">
<country xml:lang="fr">Suisse</country>
<wicri:regionArea>Computer Graphics Lab, EPFL - LIG, Lausanne</wicri:regionArea>
<placeName>
<settlement type="city">Lausanne</settlement>
<region nuts="3" type="region">Canton de Vaud</region>
</placeName>
</affiliation>
<affiliation wicri:level="1">
<country wicri:rule="url">Suisse</country>
</affiliation>
</author>
<author>
<name sortKey="Noser, Hansrudi" sort="Noser, Hansrudi" uniqKey="Noser H" first="Hansrudi" last="Noser">Hansrudi Noser</name>
<affiliation wicri:level="4">
<orgName type="university">Université de Zurich</orgName>
<country>Suisse</country>
<placeName>
<settlement type="city">Zurich</settlement>
<region nuts="3" type="region">Canton de Zurich</region>
</placeName>
</affiliation>
<affiliation wicri:level="1">
<country wicri:rule="url">Suisse</country>
</affiliation>
</author>
</analytic>
<monogr></monogr>
<series>
<title level="s">Lecture Notes in Computer Science</title>
<title level="s" type="sub">Lecture Notes in Artificial Intelligence</title>
<imprint>
<date>1999</date>
</imprint>
<idno type="ISSN">0302-9743</idno>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<idno type="ISSN">0302-9743</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="Teeft" xml:lang="en">
<term>Acoustic environment</term>
<term>Actor</term>
<term>Actor memorizes</term>
<term>Actual position</term>
<term>Angular speed</term>
<term>Animation</term>
<term>Animation system</term>
<term>Artificial fishes</term>
<term>Artificial life</term>
<term>Automatic derivation</term>
<term>Automaton</term>
<term>Autonomous</term>
<term>Autonomous actor</term>
<term>Autonomous actors</term>
<term>Autonomous agents</term>
<term>Behavior control</term>
<term>Behavioral</term>
<term>Behavioral animation</term>
<term>Behavioral model</term>
<term>Behavioral response</term>
<term>Boulic</term>
<term>Collision</term>
<term>Color coding</term>
<term>Complex environments</term>
<term>Computer animation</term>
<term>Computer graphics</term>
<term>Current position</term>
<term>Current velocity</term>
<term>Curvature cost</term>
<term>Digital actors</term>
<term>Expressive animation</term>
<term>Force field model</term>
<term>Force fields</term>
<term>Game strategy</term>
<term>Geometrical collision detection</term>
<term>Global</term>
<term>Global force field</term>
<term>Global navigation</term>
<term>Graphics</term>
<term>Hearing sensor</term>
<term>High level behavior</term>
<term>Ieee computer society press</term>
<term>Impact point</term>
<term>Interactive</term>
<term>Interactive television</term>
<term>Interactive user</term>
<term>Internal state</term>
<term>Local navigation</term>
<term>Local navigation algorithm</term>
<term>Magnenat</term>
<term>Magnenat thalmann</term>
<term>Modeling</term>
<term>Module</term>
<term>Navigation</term>
<term>Next automaton</term>
<term>Noser</term>
<term>Other objects</term>
<term>Other particles</term>
<term>Parameter space</term>
<term>Particle dynamics</term>
<term>Particle system</term>
<term>Pixel</term>
<term>Production rules</term>
<term>Propagation medium</term>
<term>Sensor</term>
<term>Sensor points</term>
<term>Simple method</term>
<term>Simulation</term>
<term>Sound event</term>
<term>Sound event handler</term>
<term>Sound event table</term>
<term>Sound events</term>
<term>Sound library</term>
<term>Sound sources</term>
<term>Sparse foothold locations</term>
<term>Special functions</term>
<term>Speech recognition</term>
<term>State variables</term>
<term>Synthetic actor</term>
<term>Synthetic sensors</term>
<term>Synthetic vision</term>
<term>Tennis court</term>
<term>Tennis game</term>
<term>Thalmann</term>
<term>Time step</term>
<term>Touch sensors</term>
<term>Trajectory</term>
<term>Turtle position</term>
<term>Unexpected obstacles</term>
<term>View angle</term>
<term>Virtual</term>
<term>Virtual actors</term>
<term>Virtual environment</term>
<term>Virtual environments</term>
<term>Virtual humans</term>
<term>Virtual life</term>
<term>Virtual reality</term>
<term>Virtual sensors</term>
<term>Virtual vision</term>
<term>Virtual world</term>
<term>Virtual worlds</term>
<term>Vision state</term>
<term>Vision system</term>
<term>Vision window</term>
<term>Visual computer</term>
<term>Visual memory</term>
<term>Wind force fields</term>
<term>World modeling</term>
</keywords>
</textClass>
<langUsage>
<language ident="en">en</language>
</langUsage>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Abstract: This paper explains methods to provide autonomous virtual humans with the skills necessary to perform a stand-alone role in films, games, and interactive television. We present current research developments in the Virtual Life of autonomous synthetic actors. After a brief description of our geometric, physical, and auditory Virtual Environments, we introduce the perception-action principles with a few simple examples. We emphasize the concept of virtual sensors for virtual humans. In particular, we describe our experiences in implementing virtual sensors such as vision sensors, tactile sensors, and hearing sensors. We then describe knowledge-based navigation, knowledge-based locomotion, and, in more detail, sensor-based tennis.</div>
</front>
</TEI>
<affiliations>
<list>
<country>
<li>Suisse</li>
</country>
<region>
<li>Canton de Vaud</li>
<li>Canton de Zurich</li>
</region>
<settlement>
<li>Lausanne</li>
<li>Zurich</li>
</settlement>
<orgName>
<li>Université de Zurich</li>
</orgName>
</list>
<tree>
<country name="Suisse">
<region name="Canton de Vaud">
<name sortKey="Thalmann, Daniel" sort="Thalmann, Daniel" uniqKey="Thalmann D" first="Daniel" last="Thalmann">Daniel Thalmann</name>
</region>
<name sortKey="Noser, Hansrudi" sort="Noser, Hansrudi" uniqKey="Noser H" first="Hansrudi" last="Noser">Hansrudi Noser</name>
<name sortKey="Thalmann, Daniel" sort="Thalmann, Daniel" uniqKey="Thalmann D" first="Daniel" last="Thalmann">Daniel Thalmann</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Sarre/explor/MusicSarreV3/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000E23 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 000E23 | SxmlIndent | more
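If the Dilib toolchain is not available, the TEI record can also be inspected with Python's standard library. A minimal sketch over a simplified fragment of the record above (the full record also carries `wicri:*` attributes whose namespace prefix would need declaring before parsing; the fragment below keeps only plain elements):

```python
import xml.etree.ElementTree as ET

# Simplified fragment of the TEI record above; values copied from the record.
RECORD = """
<record>
  <title xml:lang="en">Towards Autonomous, Perceptive, and Intelligent Virtual Actors</title>
  <idno type="doi">10.1007/3-540-48317-9_19</idno>
  <idno type="RBID">ISTEX:B547EB8A97D3A736983442C9C1F8FB4BFA13E9BE</idno>
</record>
"""

root = ET.fromstring(RECORD)
title = root.findtext("title")
# Collect every <idno>, keyed by its "type" attribute.
idnos = {e.get("type"): e.text for e in root.iter("idno")}
print(title)
print(idnos["doi"])
```

The same approach extends to the full record once the `wicri` prefix is bound to a namespace URI (or the attributes are stripped beforehand).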

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Sarre
   |area=    MusicSarreV3
   |flux=    Main
   |étape=   Exploration
   |type=    RBID
   |clé=     ISTEX:B547EB8A97D3A736983442C9C1F8FB4BFA13E9BE
   |texte=   Towards Autonomous, Perceptive, and Intelligent Virtual Actors
}}

Wicri

This area was generated with Dilib version V0.6.33.
Data generation: Sun Jul 15 18:16:09 2018. Site generation: Tue Mar 5 19:21:25 2024